[GPU] Periodic Coverity roundup #2840
base: main
Conversation
force-pushed from c159499 to 7bd3419
force-pushed from 7bd3419 to 5379609
force-pushed from 5379609 to b006734
@@ -133,7 +133,7 @@ class reduce_impl_t {
         auto a_blocks = a.blocks();
         a_blocks.erase(a_blocks.begin());
         a = layout_t(a.type(), a.ndims(), 0, a_blocks);
-        return find_1d_tile(a, b);
+        return find_1d_tile(std::move(a), std::move(b));
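For context on why the move helps, here is a minimal sketch with hypothetical stand-in types, assuming find_1d_tile takes its layout_t arguments by value (which is what the review discussion below suggests): passing the named locals as-is copies them into the parameters, while std::move lets the parameters be move-constructed, which is the pattern Coverity's copy-instead-of-move checker flags.

    // Hypothetical stand-ins for the real layout_t / tensor_t / find_1d_tile;
    // only the by-value argument passing is the point of this sketch.
    #include <utility>
    #include <vector>

    struct layout_t {
        std::vector<int> blocks; // stand-in for the real block list
    };

    struct tensor_t {};

    // Assumed by-value signature: the copy (or move) happens at the call site.
    tensor_t find_1d_tile(layout_t a, layout_t b) {
        (void)a;
        (void)b;
        return tensor_t{};
    }

    tensor_t caller(layout_t a, layout_t b) {
        // find_1d_tile(a, b) would copy both locals into the parameters;
        // std::move allows them to be move-constructed instead, since the
        // locals are not used again after this call.
        return find_1d_tile(std::move(a), std::move(b));
    }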
Looks like the signature of the function can be changed instead:
tensor_t find_1d_tile(const layout_t &a, const layout_t &b) const {
rather than doing moves.
I can't make that change, as find_1d_tile requires mutable layout_t arguments.
I might be wrong, but it looks like when a gets re-created, just a new object will do fine, because the output is a different type and doesn't seem to depend on a's lifetime (the argument makes a copy of it).
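If that reading is correct, the alternative would look roughly like the sketch below (again with hypothetical stand-in types): re-create a as a fresh local and take const references in find_1d_tile, so no moves are needed at the call site. Whether this compiles against the real code depends on whether find_1d_tile truly needs mutable layout_t arguments, which is the open question in this thread.

    #include <vector>

    struct layout_t {
        std::vector<int> blocks; // stand-in for the real block list
    };

    struct tensor_t {};

    // Reviewer's suggested signature: const references, no moves needed.
    tensor_t find_1d_tile(const layout_t &a, const layout_t &b) {
        (void)a;
        (void)b;
        return tensor_t{};
    }

    tensor_t caller(const layout_t &a_in, const layout_t &b) {
        // Build the modified layout as a new local instead of mutating a copy;
        // the result does not depend on a's lifetime, so const refs suffice.
        layout_t a = a_in;
        if (!a.blocks.empty()) a.blocks.erase(a.blocks.begin());
        return find_1d_tile(a, b);
    }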
force-pushed from 7ce3826 to f184df4
force-pushed from f184df4 to 89155b5
make test
Addresses most low severity Coverity hits for GPU and GEMM components.